Aspect-based sentiment analysis model integrating match-LSTM network and grammatical distance
LIU Hui, MA Xiang, ZHANG Linyu, HE Rujin
Journal of Computer Applications    2023, 43 (1): 45-50.   DOI: 10.11772/j.issn.1001-9081.2021111874
Aiming at the problems that aspect words are often mismatched with irrelevant context and that grammatical-level features are lacking in current Aspect-Based Sentiment Analysis (ABSA), an improved ABSA model integrating match-Long Short-Term Memory (mLSTM) and grammatical distance, namely mLSTM-GCN, was proposed. Firstly, the correlation between the aspect word and the context was calculated word by word, and the obtained attention weights were fused with the context representation as the input of the mLSTM, so that a context representation more strongly correlated with the aspect word was obtained. Then, the grammatical distance was introduced to obtain context more grammatically related to the aspect word, thereby obtaining more contextual features to guide the modeling of the aspect word, and the aspect representation was obtained through the aspect masking layer. Finally, to exchange information, location weights, context representations and aspect representations were combined, yielding the features for sentiment classification. Experimental results on the Twitter, REST14 and LAP14 datasets show that, compared with Aspect-Specific Graph Convolutional Network (ASGCN), mLSTM-GCN improves accuracy by 1.32, 2.50 and 1.63 percentage points, and improves the Macro-F1 score by 2.52, 2.19 and 1.64 percentage points, respectively. Therefore, mLSTM-GCN can effectively reduce the probability of mismatch between aspect words and irrelevant context and improve classification performance.
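As a minimal illustration of the word-by-word correlation step, the sketch below scores each context word against the aspect word with a dot product, normalizes the scores with a softmax, and scales the context vectors by the resulting attention weights. The function name and toy vectors are assumptions for illustration; the paper's mLSTM fuses these weights inside an LSTM cell rather than by plain scaling.

```python
import math

def attention_fuse(aspect_vec, context_vecs):
    """Score each context word against the aspect word, then fuse the
    softmax-normalized weights back into the context representations."""
    # Word-by-word correlation: dot product between aspect and each context word.
    scores = [sum(a * c for a, c in zip(aspect_vec, cv)) for cv in context_vecs]
    # Softmax normalization of the correlation scores (max-shifted for stability).
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Fuse: scale each context vector by its attention weight.
    fused = [[w * x for x in cv] for w, cv in zip(weights, context_vecs)]
    return weights, fused

aspect = [1.0, 0.0]
context = [[1.0, 0.0], [0.0, 1.0]]   # first word aligns with the aspect
weights, fused = attention_fuse(aspect, context)
```

The context word aligned with the aspect receives the larger weight, which is the behavior the mLSTM input relies on.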
Service composition partitioning method based on process partitioning technology
LIU Huijian, LIU Junsong, WANG Jiawei, XUE Gang
Journal of Computer Applications    2020, 40 (3): 799-805.   DOI: 10.11772/j.issn.1001-9081.2019071290
To eliminate the bottleneck of the central controller in centralized service composition, a method of constructing decentralized service composition based on process partitioning was proposed. Firstly, the business process was modeled by a typed directed graph. Then, a grouping algorithm based on graph transformation was proposed, and the process model was partitioned according to it. Finally, the decentralized service composition was constructed from the partitioning results. Test results show that, compared with the single-thread algorithm, the grouping algorithm reduces the time consumed on model 1 by 21.4%, and the constructed decentralized service composition has lower response time and higher throughput. The experimental results show that the proposed method can effectively partition the business processes in service composition, and the constructed decentralized service composition improves service performance.
Task requirement-oriented user selection incentive mechanism in mobile crowdsensing
CHEN Xiuhua, LIU Hui, XIONG Jinbo, MA Rong
Journal of Computer Applications    2019, 39 (8): 2310-2317.   DOI: 10.11772/j.issn.1001-9081.2019010226
Most existing incentive mechanisms in mobile crowdsensing are designed around the platform or the users, without multidimensional consideration of sensing task requirements; as a result, users cannot be selected effectively according to the sensing tasks, nor can the maximization and diversification of task requirements be met. To solve these problems, a Task Requirement-oriented user selection Incentive Mechanism (TRIM), a task-centered design method, was proposed. Firstly, sensing tasks were published by the sensing platform according to task requirements, and task vectors were constructed over multiple dimensions such as task type, spatio-temporal characteristics and sensing reward, so as to optimally meet the task requirements. To achieve personalized sensing participation, user vectors were constructed by the sensing users based on their preferences, individual contribution values and expected rewards. Then, by introducing the Privacy-preserving Cosine Similarity Computation (PCSC) protocol, the similarities between sensing tasks and sensing users were calculated, and the sensing platform performed user selection on the similarity comparison results to obtain the target user set, so that the sensing task requirements were better met and user privacy was protected. Finally, simulation experiments indicate that, in the matching process between sensing tasks and sensing users, TRIM avoids the exponentially growing computational time overhead of an incentive mechanism using the Paillier encryption protocol and improves computational efficiency; compared with an incentive mechanism using PCSC directly, TRIM guarantees the privacy of the sensing users while achieving 98% matching accuracy.
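The similarity step can be illustrated without the privacy-preserving layer: plain cosine similarity between a task vector and each user vector, with a threshold to form the target user set. The feature dimensions and the 0.5 threshold are assumptions for this sketch; PCSC computes the same quantity under encryption.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two non-zero vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def select_users(task_vec, user_vecs, threshold=0.5):
    """Keep users whose vector is sufficiently similar to the task vector."""
    return [i for i, uv in enumerate(user_vecs)
            if cosine_similarity(task_vec, uv) >= threshold]

# Toy vectors over (task type, spatio-temporal fit, expected reward).
task = [1.0, 0.8, 0.5]
users = [[1.0, 0.9, 0.4],   # well matched
         [0.0, 0.1, 1.0]]   # poorly matched
selected = select_users(task, users)
```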
Adaptive threshold algorithm based on statistical prediction under spatial crowdsourcing environment
LIU Hui, LI Sheng'en
Journal of Computer Applications    2018, 38 (2): 415-420.   DOI: 10.11772/j.issn.1001-9081.2017071805
Focusing on the excessive randomness of task assignment and the unsatisfactory utility value under the spatial crowdsourcing environment, an adaptive threshold algorithm based on statistical prediction was proposed. Firstly, the numbers of free tasks, free workers and free positions on the crowdsourcing platform were counted in real time to set the threshold. Secondly, according to historical statistical analysis, the distributions of tasks and workers were divided into two balanced parts, and the min-max normalization method was applied to match each task to a certain worker. Finally, the probability of the appearance of the matched workers was calculated to verify the effectiveness of the task assignment. Experimental results on real data show that, compared with the random threshold algorithm and the greedy algorithm, the proposed algorithm increases the utility value by 7% and 10%, respectively, indicating that it can reduce randomness and improve the utility value in the task assignment process.
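A sketch of the min-max normalization step, with hypothetical worker features (distance to the task and historical acceptance rate) standing in for the paper's statistics; the scoring rule combining them is illustrative only.

```python
def min_max_normalize(values):
    """Scale a feature column to [0, 1]; a constant column maps to all zeros."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical worker features for one task.
distances = [100.0, 400.0, 250.0]     # meters to the task location
accept_rates = [0.9, 0.5, 0.7]        # historical acceptance probability
norm_dist = min_max_normalize(distances)
# Score: prefer near workers (low normalized distance) with high acceptance rate.
scores = [r - d for d, r in zip(norm_dist, accept_rates)]
best_worker = scores.index(max(scores))
```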
Dimension reduction method of brain network state observation matrix based on Spectral Embedding
DAI Zhaokun, LIU Hui, WANG Wenzhe, WANG Yanan
Journal of Computer Applications    2017, 37 (8): 2410-2415.   DOI: 10.11772/j.issn.1001-9081.2017.08.2410
Since the brain network state observation matrix reconstructed from functional Magnetic Resonance Imaging (fMRI) is high-dimensional and lacks distinctive features, a dimensionality reduction method based on Spectral Embedding was presented. Firstly, a Laplacian matrix was constructed from the similarity measurements between samples. Secondly, the first two main eigenvectors obtained by factorizing the Laplacian matrix were selected to construct a two-dimensional eigenvector space, thereby mapping the dataset from high dimension to low dimension. The method was applied to reduce the dimension of the matrix and visualize it in two-dimensional space, and the results were evaluated by category validity indicators. Compared with dimensionality reduction algorithms such as Principal Component Analysis (PCA), Locally Linear Embedding (LLE) and Isometric Mapping (Isomap), the mapping points in the low-dimensional space obtained by the proposed method have obvious category significance. In terms of the category validity indicators, compared with the Multi-Dimensional Scaling (MDS) and t-distributed Stochastic Neighbor Embedding (t-SNE) algorithms, the proposed method decreases the Di index (the average distance among within-class samples) by 87.1% and 65.2% respectively, and increases the Do index (the average distance among between-class samples) by 351.3% and 25.5% respectively. Finally, the visualization results over a number of samples show a certain regularity, validating the effectiveness and universality of the proposed method.
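The first step, building the unnormalized graph Laplacian L = D - W from the pairwise similarity matrix, can be sketched directly; Spectral Embedding then takes the eigenvectors of this matrix corresponding to the smallest non-zero eigenvalues, which a linear-algebra library would supply.

```python
def graph_laplacian(similarity):
    """Unnormalized Laplacian L = D - W from a symmetric similarity matrix W,
    where D is the diagonal degree matrix of row sums."""
    n = len(similarity)
    degree = [sum(row) for row in similarity]
    return [[(degree[i] if i == j else 0.0) - similarity[i][j]
             for j in range(n)] for i in range(n)]

# Tiny similarity matrix for three samples on a path graph.
W = [[0.0, 1.0, 0.0],
     [1.0, 0.0, 1.0],
     [0.0, 1.0, 0.0]]
L = graph_laplacian(W)
```

Every row of a valid Laplacian sums to zero, since the constant vector lies in its null space; that property is a useful sanity check before eigendecomposition.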
Application of weighted Fast Newman modularization algorithm in human brain structural network
XIA Yidan, WANG Bin, DONG Yingzhao, LIU Hui, XIONG Xin
Journal of Computer Applications    2016, 36 (12): 3347-3352.   DOI: 10.11772/j.issn.1001-9081.2016.12.3347
Binary brain network modularization is insufficient to describe the physiological features of the human brain. To solve this problem, a modularization algorithm for weighted brain networks based on the binary Fast Newman algorithm was presented. Taking the hierarchical clustering idea of condensing nodes as the basis, a weighted modularity indicator was built mainly on the weight of a single node and the weight of the entire network. The modularity increment was then taken as the testing index to decide which two nodes should be combined in the weighted brain network, thus realizing module partition. The proposed method was applied to detect the modular structure of group-averaged data from 60 healthy people. The experimental results show that, compared with the modular structure of the binary brain network, the proposed method increases brain network modularity by 28% and reveals more significant differences between the inside and outside of modules; moreover, the modular structure it finds is more consistent with the physiological characteristics of the human brain. Compared with two existing weighted modularization algorithms, the proposed method also slightly improves modularity while guaranteeing reasonable identification of the human brain modular structure.
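The quantity being optimized can be illustrated with Newman's standard weighted modularity, Q = (1/2W) Σ_ij (w_ij - s_i s_j / 2W) δ(c_i, c_j), where s_i is the node strength and 2W the total weight. The indicator built in the paper may differ in detail, so treat this as the textbook baseline.

```python
def weighted_modularity(w, communities):
    """Newman's modularity for a weighted, undirected graph.
    w: symmetric weight matrix; communities: module label per node."""
    two_w = sum(sum(row) for row in w)      # 2W: each edge counted twice
    strength = [sum(row) for row in w]      # weighted degree (strength) per node
    q = 0.0
    n = len(w)
    for i in range(n):
        for j in range(n):
            if communities[i] == communities[j]:
                q += w[i][j] - strength[i] * strength[j] / two_w
    return q / two_w

# Two weighted triangles (edge weight 2) joined by one weak bridge (weight 1).
w = [[0, 2, 2, 0, 0, 0],
     [2, 0, 2, 0, 0, 0],
     [2, 2, 0, 1, 0, 0],
     [0, 0, 1, 0, 2, 2],
     [0, 0, 0, 2, 0, 2],
     [0, 0, 0, 2, 2, 0]]
q_good = weighted_modularity(w, [0, 0, 0, 1, 1, 1])  # triangles as modules
q_bad = weighted_modularity(w, [0, 1, 0, 1, 0, 1])   # modules cut across triangles
```

The partition that respects the two triangles scores higher, which is exactly the signal a greedy Fast Newman merge follows.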
Optimization algorithm based on R-λ model rate control in H.265/HEVC
LIAO Jundong, LIU Licheng, HAO Luguo, LIU Hui
Journal of Computer Applications    2016, 36 (11): 2993-2997.   DOI: 10.11772/j.issn.1001-9081.2016.11.2993
To improve the bit allocation of the Largest Coding Unit (LCU) and the update precision of the parameters (α, β) in the R-λ model based rate control algorithm of H.265/HEVC, an optimized rate control algorithm was proposed. Bit allocation was carried out on the existing basic coding units, and the parameters (α, β) were updated by using the coding distortion degree. The experimental results show that, in the constant bit rate case, compared with the HM13.0 rate control algorithm, the PSNR gain of the three components improves by at least 0.76 dB, the coding transmission bits are reduced by at least 0.46%, and the coding time is reduced by at least 0.54%.
Hierarchical modeling method based on extensible port technology in real-time field
WANG Bin, CUI Xiaojie, HE Bi, LIU Hui, XU Shenglei, WANG Xiaojun
Journal of Computer Applications    2015, 35 (3): 872-877.   DOI: 10.11772/j.issn.1001-9081.2015.03.872
When the Model Driven Development (MDD) method is used in the real-time field, it is difficult to describe a whole control system completely and clearly in a single layer. A real-time multi-layer modeling method based on hierarchy theory was presented in this study. Extensible input and output ports were adopted to extend the existing meta-model technique in the real-time field; the eXtensible Markup Language (XML) was used to describe the ports, and a channel-based message transfer mechanism was applied to realize communication between models in multiple layers. The modeling results for a real-time control system show that, compared with the single-layer modeling method, the hierarchical modeling method can effectively support the description of parallel interactions between multiple tasks when using model driven development in the real-time field, thereby enhancing the visibility and reusability of real-time complex system models.
Image restoration algorithm of adaptive weighted encoding and L1/2 regularization
ZHA Zhiyuan, LIU Hui, SHANG Zhenhong, LI Runxin
Journal of Computer Applications    2015, 35 (3): 835-839.   DOI: 10.11772/j.issn.1001-9081.2015.03.835
Aiming at the denoising problem in image restoration, an adaptive weighted encoding and L1/2 regularization method was proposed. Firstly, since many real images contain not only Gaussian noise but also Laplace noise, an Improved L1-L2 Hybrid Error Model (IHEM) method was proposed, which combines the advantages of the L1 norm and the L2 norm. Secondly, considering that the noise distribution changes during the iteration process, an adaptive membership degree method was proposed to reduce the number of iterations and the computational cost, and an adaptive weighted encoding method was applied, which is effective in handling heavy-tailed noise distributions. In addition, L1/2 regularization was introduced to obtain a sparser solution. The experimental results demonstrate that, compared with the IHEM method, the proposed algorithm improves the Peak Signal-to-Noise Ratio (PSNR) by about 3.5 dB and the Structural SIMilarity (SSIM) by about 0.02 on average, and it obtains ideal results when dealing with different kinds of noise.
LTE downlink cross-layer scheduling algorithm based on QoS for value-added service
LIU Hui, ZHANG Sailong
Journal of Computer Applications    2015, 35 (2): 336-339.   DOI: 10.11772/j.issn.1001-9081.2015.02.0336
Aiming at the problem of how to achieve the different rates of value-added service users in the Long Term Evolution (LTE) system, an optimized Proportional Fairness (PF) algorithm was proposed. Considering channel conditions, pay level and satisfaction, the optimized PF algorithm uses a QoS-aware service eigenfunction to properly schedule paying users whose paid rates have not yet been achieved, so that the rates corresponding to the different pay levels can be reached. Simulations conducted in the Matlab environment show that the optimized PF algorithm outperforms the traditional PF algorithm in satisfaction and effective throughput: compared with the traditional PF algorithm, the difference of average satisfaction between paying users was about 26%, and the average effective throughput increased by 17%. The simulation results indicate that, under the premise of QoS in multi-service scenarios, the optimized algorithm can achieve the different users' perceived average rates, guarantee satisfaction among the different paying parties, and raise the effective throughput of the system.
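A toy version of a pay-aware PF metric: the classic ratio of instantaneous achievable rate to average throughput, scaled by a hypothetical pay-level weight. The paper's QoS-aware eigenfunction is not specified in the abstract, so the simple scaling here is purely an assumption.

```python
def pf_priority(inst_rate, avg_throughput, pay_weight=1.0, eps=1e-9):
    """Classic PF metric (rate / average throughput) scaled by a pay weight."""
    return pay_weight * inst_rate / (avg_throughput + eps)

def schedule(users):
    """Pick the user with the highest weighted PF metric this TTI."""
    metrics = [pf_priority(u["rate"], u["avg"], u["pay"]) for u in users]
    return metrics.index(max(metrics))

users = [
    {"rate": 10.0, "avg": 5.0, "pay": 1.0},   # metric ~2.0
    {"rate": 10.0, "avg": 2.0, "pay": 1.0},   # metric ~5.0: starved user wins
    {"rate": 10.0, "avg": 5.0, "pay": 2.0},   # metric ~4.0: pay weight helps
]
chosen = schedule(users)
```

Dividing by average throughput is what gives PF its fairness: a user who has been served little sees its metric grow until it is scheduled.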
Frequent closed itemset mining algorithm over uncertain data
LIU Huiting, SHEN Shengxia, ZHAO Peng, YAO Sheng
Journal of Computer Applications    2015, 35 (10): 2911-2914.   DOI: 10.11772/j.issn.1001-9081.2015.10.2911
Due to the downward closure property, existing solutions that mine all frequent itemsets over uncertain data may yield an exponential number of results. To obtain a reasonably small result set, the discovery of frequent closed itemsets over uncertain data was studied, and a new algorithm called Normal Approximation-based Probabilistic Frequent Closed Itemset Mining (NA-PFCIM) was proposed. The new method regarded the support of an itemset as a probability distribution, and mined frequent itemsets by using a normal distribution model that supports large databases and extracts frequent itemsets with a high degree of accuracy. The algorithm then adopted a depth-first search strategy to obtain all probabilistic frequent closed itemsets, so as to reduce the search space and avoid redundant computation; two probabilistic pruning techniques, superset pruning and subset pruning, were also used. Finally, the effectiveness and efficiency of the proposed method were verified by comparison with the Poisson distribution based algorithm A-PFCIM. The experimental results show that NA-PFCIM decreases the number of extended itemsets, reduces the computational complexity, and performs better than the compared algorithm.
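The normal approximation at the core of this family of algorithms can be sketched as follows: the support of an itemset over independent uncertain transactions follows a Poisson-binomial distribution, which is approximated by N(μ, σ²) with μ = Σp_t and σ² = Σp_t(1 - p_t); a continuity-corrected normal tail then estimates the frequentness probability. This is the generic textbook form, not NA-PFCIM's exact implementation.

```python
import math

def prob_frequent_normal(probs, minsup):
    """P(support >= minsup) for an itemset that appears independently with
    probability p_t in each uncertain transaction, via normal approximation
    of the Poisson-binomial support distribution."""
    mu = sum(probs)
    var = sum(p * (1 - p) for p in probs)
    if var == 0:                       # degenerate case: all p_t in {0, 1}
        return 1.0 if mu >= minsup else 0.0
    # Continuity-corrected tail: P(X >= minsup) ~ 1 - Phi((minsup - 0.5 - mu) / sigma)
    z = (minsup - 0.5 - mu) / math.sqrt(var)
    return 0.5 * (1 - math.erf(z / math.sqrt(2)))

# Itemset seen with probability 0.8 in each of 10 uncertain transactions.
p = prob_frequent_normal([0.8] * 10, minsup=5)
```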
Blowing state recognition of basic oxygen furnace based on feature of flame color texture complexity
LI Pengju, LIU Hui, WANG Bin, WANG Long
Journal of Computer Applications    2015, 35 (1): 283-288.   DOI: 10.11772/j.issn.1001-9081.2015.01.0283
In converter blowing state recognition based on flame images, the color texture information of the flame is underutilized and the state recognition rate of existing methods still needs to be improved. To deal with this problem, a new converter blowing state recognition method based on the flame color texture complexity feature was proposed. Firstly, the flame image was transformed into the HSI color space and non-uniformly quantified. Secondly, the co-occurrence matrix of the H and S components was computed to fuse the color information of the flame image. Thirdly, the flame texture complexity feature descriptor was calculated from the color co-occurrence matrix. Finally, the Canberra distance was used as the similarity criterion to classify and identify the blowing state. The experimental results show that, while meeting real-time requirements, the recognition rate of the proposed method is increased by 28.33% and 3.33% compared with the gray-level co-occurrence matrix and gray differential statistics methods, respectively.
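The Canberra distance used as the similarity criterion, plus a nearest-template classifier, can be sketched as below; the two-dimensional feature templates are made-up placeholders for the paper's texture complexity descriptors.

```python
def canberra(u, v):
    """Canberra distance: sum of |u_i - v_i| / (|u_i| + |v_i|),
    skipping terms where both components are zero."""
    d = 0.0
    for a, b in zip(u, v):
        denom = abs(a) + abs(b)
        if denom:
            d += abs(a - b) / denom
    return d

def classify(sample, templates):
    """Nearest blowing-state template under Canberra distance."""
    dists = {state: canberra(sample, feat) for state, feat in templates.items()}
    return min(dists, key=dists.get)

# Hypothetical complexity-feature templates for three blowing stages.
templates = {"early": [0.9, 0.1], "middle": [0.5, 0.5], "late": [0.1, 0.9]}
state = classify([0.45, 0.55], templates)
```

The per-component normalization makes Canberra sensitive to relative differences near zero, which is often why it is chosen over Euclidean distance for feature histograms.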
Information hiding technology based on digital screening and its application in covert secrecy communication
GUO Wei, LIU Huiping
Journal of Computer Applications    2014, 34 (9): 2645-2649.   DOI: 10.11772/j.issn.1001-9081.2014.09.2645
To address the confidentiality and capacity problems in the modern network communication environment, an information hiding method based on digital screening technology was proposed, in which information is embedded into digital text documents to achieve secure communication. In this method, the watermark information was hidden in background shades composed of screen dots and fused with stochastic Frequency Modulation (FM) screen dot images; the background shades with the embedded information were then added to the text document as regular elements. The analysis and experimental results indicate that the proposed method has a large information capacity, embedding up to 72000 Chinese characters in one A4-size document page, along with good visual effect, strong concealment, a high security level and small file size, so it can be widely used in modern secure network communication.
Real-time scheduling algorithm for periodic priority exchange
WANG Bin, WANG Cong, XUE Hao, LIU Hui, XIONG Xin
Journal of Computer Applications    2014, 34 (3): 668-672.   DOI: 10.11772/j.issn.1001-9081.2014.03.0668
A static priority scheduling algorithm with periodic priority exchange was proposed to resolve the latency problem of low-priority tasks in real-time multi-task systems. In this method, a fixed timeslice period was defined, and two independent tasks of different priorities in the multi-task system exchanged their priority levels periodically. Under the precondition that the execution time of the higher-priority task is guaranteed, the lower-priority task gains more opportunities to execute as soon as possible, shortening its execution delay. The proposed method can effectively improve the poor real-time performance of low-priority tasks and enhance the overall control capability of real-time multi-task systems.
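The exchange rule can be sketched as a function of time: within even timeslices the base priorities hold, and within odd ones the designated pair of tasks is swapped. The slice length and the two-task set are illustrative; a real scheduler would apply this inside its dispatch loop.

```python
def priorities_at(t, base_priorities, pair, period):
    """Static priorities, except the two tasks in `pair` swap levels
    during every other timeslice of length `period`."""
    p = dict(base_priorities)
    if (t // period) % 2 == 1:    # odd slices: exchange the pair
        a, b = pair
        p[a], p[b] = p[b], p[a]
    return p

base = {"high": 1, "low": 2}      # lower number = higher priority
# During even slices "high" runs first; during odd slices "low" gets its turn.
even = priorities_at(0, base, ("high", "low"), period=10)
odd = priorities_at(10, base, ("high", "low"), period=10)
```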
Improved foreground detection based on statistical model
QIANG Zhenping, LIU Hui, SHANG Zhenhong, CHEN Xu
Journal of Computer Applications    2013, 33 (06): 1682-1694.   DOI: 10.3724/SP.J.1087.2013.01682
The foreground detection method based on a statistical model was improved in two ways. On one hand, the historical maximum probability that a feature vector belongs to the background was recorded in the background model, which accelerates the updating of matched vectors and blends them into the background quickly. On the other hand, a spatial feature matching method was proposed to reduce the shadow effect in foreground detection. The experimental results show that, compared with the MoG method and Li's statistical model method, the proposed method achieves obvious improvement in shadow removal and in restoring background occluded by large target objects.
Ant colony optimization based on window updating for time series segmentation
LIU Hui-bin, HE Zhen-feng
Journal of Computer Applications    2011, 31 (11): 3104-3107.   DOI: 10.3724/SP.J.1087.2011.03104
A modified Ant Colony Optimization (ACO) algorithm was applied to time series segmentation, which updated pheromone based on windows according to the inner continuity of the time series to improve optimization efficiency. The algorithm strengthened pheromone according to series continuity, thus improving the positive feedback of the ants, which in turn helped the ants choose paths in the next cycle. Experiments on real data sets validate that the modified method can accelerate the algorithm's convergence and reduce the segmentation cost to a certain extent.
Multi-layer feed-forward neural network approach to case-based reasoning
LI Jian-yang, NI Zhi-wei, LIU Hui-ting
Journal of Computer Applications    2005, 25 (11): 2650-2652.  
When a case-based reasoning (CBR) system learns new cases, the size of the case base increases, leading to longer retrieval time and lower retrieval efficiency. The multi-layer feed-forward neural network is a constructive neural network that can be easily built and understood from the features of the training examples, with low time and space complexity. Its covering algorithm recognizes the whole training set, generalizes well to the test set, and has been used to classify many difficult problems effectively and rapidly. Here, the case base was partitioned into several sub-case bases by a multi-layer feed-forward neural network with the covering algorithm, so that a new problem need be retrieved only within a specific sub-case base. This addresses the performance problem of CBR systems, especially with a large case base, guaranteeing both competence and efficiency.
An adaptive queue scheduling mechanism for supporting DiffServ
LIU Hui, XIA Han-zhu, LIU Xiang
Journal of Computer Applications    2005, 25 (04): 886-888.   DOI: 10.3724/SP.J.1087.2005.0886
The WRR and DWRR scheduling algorithms in the DiffServ architecture were discussed, and based on WRR, an Adaptive Weighted Round-Robin (AWRR) algorithm and its implementation were presented. In this algorithm, different packets are assigned different weights and are scheduled accordingly, so AWRR can not only meet QoS requirements but also dynamically assign bandwidth according to the node's condition and share the bandwidth resource.
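One way the dynamic bandwidth assignment could look: blend each queue's static WRR weight with its current backlog share, keeping the total weight budget constant. The 50/50 blend is purely an assumption for illustration, since the abstract does not give AWRR's actual update rule.

```python
def awrr_weights(base_weights, queue_lengths):
    """Adapt each queue's WRR weight toward its share of the current backlog,
    preserving the total weight budget."""
    total_base = sum(base_weights)
    total_len = sum(queue_lengths)
    if total_len == 0:                 # no backlog: keep the static weights
        return list(base_weights)
    # Blend static weight with backlog share (illustrative 50/50 heuristic).
    return [0.5 * w + 0.5 * total_base * q / total_len
            for w, q in zip(base_weights, queue_lengths)]

base = [4.0, 2.0, 2.0]                # static DiffServ class weights
lengths = [0, 10, 10]                 # first queue idle, others backlogged
adapted = awrr_weights(base, lengths)
```

The idle queue's weight shrinks while the backlogged queues gain bandwidth, which is the adaptive sharing behavior the abstract describes.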